Results 1 - 20 of 25,829
1.
J Drugs Dermatol ; 23(5): e132-e133, 2024 05 01.
Article in English | MEDLINE | ID: mdl-38709690

ABSTRACT

Skin self-examinations play a vital role in skin cancer detection and are often aided by online resources. Available reference photos must display the full spectrum of skin tones so patients may visualize how skin lesions can appear. This study investigated the portrayal of skin tones in skin cancer-related Google Images and found a significant underrepresentation of darker skin tones. J Drugs Dermatol. 2024;23(5):e132-e133. doi:10.36849/JDD.7886e.


Subject(s)
Skin Neoplasms, Skin Pigmentation, Humans, Skin Neoplasms/diagnosis, Skin Neoplasms/pathology, Photography, Self-Examination/methods, Skin/pathology, Internet, Search Engine
2.
J Drugs Dermatol ; 23(5): e137-e138, 2024 05 01.
Article in English | MEDLINE | ID: mdl-38709691

ABSTRACT

When patients self-detect suspicious skin lesions, they often reference online photos prior to seeking medical evaluation. Online images must be available in the full spectrum of skin tones to provide accurate visualizations of disease, especially given the increased morbidity and mortality from skin cancer in patients with darker skin tones. The purpose of this study was to evaluate the representation of skin tones in photos of skin cancer on patient-facing websites. Six federal and organizational websites were evaluated, and of the 372 total representations identified, only 49 (13.2%) depicted darker skin tones. This highlights the need to improve skin tone representation in patient-facing online resources. J Drugs Dermatol. 2024;23(5):e137-e138. doi:10.36849/JDD.7905e.


Subject(s)
Internet, Patient Education as Topic, Skin Neoplasms, Skin Pigmentation, Humans, Skin Neoplasms/diagnosis, Patient Education as Topic/methods, Photography, Skin
3.
Skin Res Technol ; 30(5): e13690, 2024 May.
Article in English | MEDLINE | ID: mdl-38716749

ABSTRACT

BACKGROUND: The response of AI in situations that mimic real-life scenarios is poorly explored in highly diverse populations. OBJECTIVE: To assess the accuracy and validate the relevance of an automated, algorithm-based analysis of facial attributes relevant to the adornment routines of women. METHODS: In a cross-sectional study, two diversified groups with similar distributions of age, ancestry, skin phototype, and geographical location were created from the selfie images of 1041 women in a US population. 521 images were analyzed as part of a new training dataset aimed at improving the original algorithm, and 520 were used to validate the performance of the AI. For a total of 23 facial attributes (16 continuous and 7 categorical), all images were analyzed by 24 make-up experts and by the automated descriptor tool. RESULTS: For all facial attributes, both the new and the original automated tools surpassed the grading of the experts on a diverse population of women. For the 16 continuous attributes, the gradings obtained by the new system correlated strongly with the assessments made by make-up experts (r ≥ 0.80; p < 0.0001), supported by a low error rate. For the seven categorical attributes, the overall accuracy of the AI facial descriptor was improved via enrichment of the training dataset, although weaker performance in spotting some specific facial attributes was noted. CONCLUSION: The AI-automated facial descriptor tool was deemed accurate for the analysis of facial attributes of diverse women, although some skin complexion, eye color, and hair features required further fine-tuning.


Subject(s)
Algorithms, Face, Humans, Female, Cross-Sectional Studies, Adult, Face/anatomy & histology, Face/diagnostic imaging, United States, Middle Aged, Young Adult, Photography, Reproducibility of Results, Artificial Intelligence, Adolescent, Aged, Skin Pigmentation/physiology
4.
J Vis ; 24(5): 1, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691088

ABSTRACT

Still life paintings comprise a wealth of data on visual perception. Prior work has shown that the color statistics of objects show a marked bias for warm colors. Here, we ask about the relative chromatic contrast of these object-associated colors compared with background colors in still life paintings. We reasoned that, owing to the memory color effect, in which the color of familiar objects is perceived as more saturated, warm colors will be relatively more saturated than cool colors in still life paintings compared with photographs. We analyzed color in 108 slides of still life paintings of fruit from the teaching slide collection of the Fogg University Art Museum and in 41 color-calibrated photographs of fruit from the McGill data set. The results show that the relatively higher chromatic contrast of warm colors was greater for paintings than for photographs, consistent with the hypothesis.


Subject(s)
Color Perception, Fruit, Paintings, Photography, Humans, Color Perception/physiology, Photography/methods, Color, Contrast Sensitivity/physiology
5.
Nutrients ; 16(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732541

ABSTRACT

Nuts are nutrient-dense foods and can be incorporated into a healthy diet. Artificial intelligence-powered diet-tracking apps may promote nut consumption by providing real-time, accurate nutrition information but depend on data and model availability. Our team developed a dataset comprising 1380 photographs, each in RGB color format and with a resolution of 4032 × 3024 pixels. These images feature 11 types of commonly consumed nuts. Each photo includes three nut types; each type consists of 2-4 nuts, so each image contains 6-9 nuts. Rectangular bounding boxes were drawn using the visual geometry group (VGG) image annotator to facilitate the identification of each nut, delineating their locations within the images. This approach makes the dataset an excellent resource for training models capable of multi-label classification and object detection, as it was meticulously divided into training, validation, and test subsets. Using transfer learning in Python with the IceVision framework, deep neural network models were trained to recognize and localize the nuts depicted in the photographs. The final model achieved a mean average precision of 0.7596 in identifying the various nut types within the validation subset and a 97.9% accuracy rate in determining the number and kinds of nuts present in the test subset. By integrating specific nutritional data for each type of nut, the model can precisely (with error margins ranging from 0.8 to 2.6%) calculate the combined nutritional content of the nuts shown in a photograph, encompassing total energy, proteins, carbohydrates, fats (total and saturated), fiber, vitamin E, and essential minerals such as magnesium, phosphorus, copper, manganese, and selenium. Both the dataset and the model have been made publicly available to foster data exchange and the spread of knowledge.
Our research underscores the potential of leveraging photographs for automated nut calorie and nutritional content estimation, paving the way for the creation of dietary tracking applications that offer real-time, precise nutritional insights to encourage nut consumption.
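The last step described above (per-nut counts plus per-nut nutritional data, summed to a combined total) is a straightforward lookup-and-sum. A minimal sketch follows; the per-nut values and nut names are illustrative assumptions, not the table used in the study:

```python
# Illustrative per-nut nutrition values (per single nut).
# These numbers are assumptions for the sketch, NOT the study's nutrient table.
NUTRITION_PER_NUT = {
    "almond": {"kcal": 7.0, "protein_g": 0.26},
    "cashew": {"kcal": 8.7, "protein_g": 0.29},
    "walnut": {"kcal": 26.0, "protein_g": 0.61},
}

def combined_nutrition(counts):
    """Sum nutrient totals over detected nut counts, e.g. {"almond": 2, "walnut": 1}."""
    totals = {"kcal": 0.0, "protein_g": 0.0}
    for nut, n in counts.items():
        for nutrient, per_nut in NUTRITION_PER_NUT[nut].items():
            totals[nutrient] += n * per_nut
    return totals
```

In the study's pipeline, the counts would come from the object-detection model's per-class detections rather than being supplied by hand.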


Subject(s)
Neural Networks, Computer, Nutritive Value, Nuts, Photography, Humans, Deep Learning, Nutrients/analysis
6.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732872

ABSTRACT

This paper presents an experimental evaluation of a wearable light-emitting diode (LED) transmitter in an optical camera communications (OCC) system. The evaluation is conducted under conditions of controlled user movement during indoor physical exercise, encompassing both mild and intense exercise scenarios. We introduce an image processing algorithm designed to identify a template signal transmitted by the LED and detected within the image. To enhance this process, we utilize the dynamics of controlled exercise-induced motion to limit the tracking process to a smaller region within the image. We demonstrate the feasibility of detecting the transmitting source within the frames and of restricting tracking to this smaller region, achieving a reduction of 87.3% for mild exercise and 79.0% for intense exercise.


Subject(s)
Algorithms, Exercise, Wearable Electronic Devices, Humans, Exercise/physiology, Image Processing, Computer-Assisted/methods, Photography/instrumentation, Photography/methods, Delivery of Health Care
7.
Ann Med ; 56(1): 2352018, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38738798

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a common complication of diabetes and may lead to irreversible visual loss. Efficient screening and improved treatment of both diabetes and DR have improved the visual prognosis of DR. The number of patients with diabetes is increasing, and telemedicine, mobile handheld devices and automated solutions may alleviate the burden on healthcare. We compared the performance of 21 artificial intelligence (AI) algorithms for referable DR screening on datasets captured with the handheld Optomed Aurora fundus camera in a real-world setting. PATIENTS AND METHODS: Prospective study of 156 patients (312 eyes) attending DR screening and follow-up. Both papilla- and macula-centred 50° fundus images were taken of each eye. DR was graded by experienced ophthalmologists and 21 AI algorithms. RESULTS: Most eyes, 183 out of 312 (58.7%), had no DR, and mild NPDR was noted in 21 (6.7%) of the eyes. Moderate NPDR was detected in 66 (21.2%) of the eyes, severe NPDR in 1 (0.3%), and PDR in 41 (13.1%), together comprising a group of 34.6% of eyes with referable DR. The AI algorithms achieved a mean agreement of 79.4% for referable DR, but the results varied from 49.4% to 92.3%. The mean sensitivity for referable DR was 77.5% (95% CI 69.1-85.8) and specificity 80.6% (95% CI 72.1-89.2). The rate of images ungradable by AI varied from 0% to 28.2% (mean 1.9%). Nineteen of the 21 (90.5%) AI algorithms graded at least 98% of the images for DR. CONCLUSIONS: Fundus images captured with the Optomed Aurora were suitable for DR screening. The performance of the AI algorithms varied considerably, emphasizing the need for external validation of screening algorithms in real-world settings before their clinical application.
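The headline screening metrics above (sensitivity and specificity with 95% confidence intervals) reduce to simple proportions over a referable/non-referable confusion matrix. A minimal sketch follows, using a plain Wald interval as an assumption; the paper's exact interval method is not stated in the abstract:

```python
import math

def sens_spec_with_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with simple Wald 95% confidence intervals.

    tp/fn are counts among truly referable eyes; tn/fp among non-referable eyes.
    """
    def proportion_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)  # Wald half-width (illustrative)
        return p, (p - half, p + half)

    sensitivity, sens_ci = proportion_ci(tp, tp + fn)
    specificity, spec_ci = proportion_ci(tn, tn + fp)
    return sensitivity, sens_ci, specificity, spec_ci
```

For small counts or proportions near 0 or 1, a Wilson or exact (Clopper-Pearson) interval would be a better choice than Wald.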


What is already known on this topic? Diabetic retinopathy (DR) is a common complication of diabetes. Efficient screening and timely treatment are important to avoid the development of sight-threatening DR. The increasing number of patients with diabetes and DR poses a challenge for healthcare.
What does this study add? Telemedicine, mobile handheld devices and artificial intelligence (AI)-based automated algorithms are likely to alleviate the burden by improving the efficacy of DR screening programs. Reliable, high-quality algorithms exist despite the variability between solutions.
How might this study affect research, practice or policy? AI algorithms improve the efficacy of screening and might be implemented in clinical use after thorough validation in a real-life setting.


Subject(s)
Algorithms, Artificial Intelligence, Diabetic Retinopathy, Fundus Oculi, Humans, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/diagnostic imaging, Female, Prospective Studies, Middle Aged, Male, Aged, Adult, Photography/instrumentation, Mass Screening/methods, Mass Screening/instrumentation, Sensitivity and Specificity
8.
Appetite ; 198: 107377, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38679064

ABSTRACT

Most instruments measuring nutrition literacy evaluate theoretical knowledge and do not necessarily reflect the skills relevant to food choices. We aimed to develop and validate a photograph-based instrument to assess nutrition literacy (NUTLY) among adults in Portugal. NUTLY assesses the ability to distinguish foods with different nutritional profiles: from each of several combinations of three photographs (two foods with similar content and one with higher content), participants are asked to identify the food with the highest energy/sodium content. The NUTLY version with 79 combinations, obtained after evaluation by experts and lay people, was applied to a sample representing different age, gender and education groups (n = 329). Dimensionality was evaluated through latent trait models. Combinations with negative or small positive factor loadings were excluded after critical assessment. Internal consistency was measured using Cronbach's alpha, and construct validity by comparing NUTLY scores with those obtained on the Medical Term Recognition Test and the Newest Vital Sign (NVS), and across education and nutrition/health training groups. The cut-off distinguishing adequate from inadequate nutrition literacy was defined through ROC analysis using the Youden index criterion, after a latent class analysis identified a two-class model as having the best goodness of fit. Test-retest reliability was assessed after one month (n = 158). The final NUTLY scale was unidimensional and included 48 combinations (energy: 33; sodium: 15; α = 0.74). Mean scores (± standard deviation) were highest among nutritionists (39.9 ± 4.4), followed by health professionals (38.5 ± 4.1), and declined with decreasing education (p < 0.001). Those with adequate nutrition literacy according to the NVS showed higher NUTLY scores (37.9 ± 4.3 vs. 33.9 ± 6.9, p < 0.001). Adequate nutrition literacy was defined as a NUTLY score ≥ 35 (sensitivity: 89.3%; specificity: 93.7%).
Test-retest reliability was high (ICC = 0.77). NUTLY is a valid and reliable nutrition literacy measurement tool.
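The Youden-index criterion used above to pick the adequacy cut-off (score ≥ 35) can be sketched as a plain threshold sweep over ROC operating points. The data and implementation below are illustrative assumptions, not the authors' code:

```python
def youden_cutoff(scores, labels):
    """Return the score threshold maximizing Youden's J = sensitivity + specificity - 1.

    labels: 1 = adequate literacy (positive class), 0 = inadequate;
    scores >= threshold are classified positive.
    """
    best_j, best_t = -1.0, None
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

In the study, the reference labels came from the latent class analysis; here they are arbitrary example labels.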


Subject(s)
Health Literacy, Photography, Humans, Female, Male, Adult, Reproducibility of Results, Portugal, Middle Aged, Young Adult, Health Knowledge, Attitudes, Practice, Aged, Surveys and Questionnaires/standards, Adolescent
9.
BMC Psychol ; 12(1): 233, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38664723

ABSTRACT

BACKGROUND: Organizational accounts on social networking sites (SNSs) are similar to individual accounts in terms of their online behaviors. Thus, they can be investigated from the perspective of personality, as individual accounts have been in the literature. Focusing on startups' Instagram accounts, this study aimed to investigate the characteristics of Big Five personality traits and the relationships between the traits and the characteristics of photos in organizational SNS accounts. METHODS: The personality traits of 108 startups' accounts were assessed with an online artificial intelligence service, and a correspondence analysis was performed to identify the key dimensions along which the accounts were distributed by their personality. Photo features were extracted at the content and pixel levels, and correlational analyses between personality traits and photo features were conducted. Moreover, predictive analyses were performed using random forest regression models. RESULTS: The accounts showed high openness, agreeableness, and conscientiousness and moderate extraversion and neuroticism. In addition, two dimensions in the accounts' distribution by personality traits were identified: high vs. low neuroticism, and extraversion/openness vs. conscientiousness/agreeableness. Conscientiousness was the trait most associated with photo features, in particular with content category, pixel-color, and visual features, while agreeableness was the least associated. Neuroticism was correlated mainly with pixel-level features, openness mainly with pixel-color features, and extraversion mainly with facial features. All personality traits except neuroticism could be predicted from the photo features.
CONCLUSIONS: This study applied the theoretical lens of personality, which has been mainly used to examine individuals' behaviors, to investigate the SNS communication of startups. Moreover, it focused on the visual communication of organizational accounts, which has not been actively studied in the literature. This study has implications for expanding the realm of personality research to organizational SNS accounts.


Subject(s)
Personality, Photography, Social Media, Humans, Adult, Male, Female, Artificial Intelligence, Neuroticism
10.
Nature ; 628(8009): 922, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38637710
11.
Technol Cult ; 65(1): 1-5, 2024.
Article in English | MEDLINE | ID: mdl-38661791

ABSTRACT

The cover of this issue of Technology and Culture illustrates how China implemented, and promoted, on-the-job training in Africa. The image shows a Tanzanian dentist practicing dentistry under the supervision of a Chinese doctor in rural Tanzania, probably in the 1970s. Despite the ineffectiveness of the on-the-job training model, the photograph attempts to project the success of the dental surgery techniques exchanged between China and Tanzania, using simple medical equipment rather than sophisticated medical knowledge. The rural setting reflects the ideological struggle of the Cold War era, when Chinese doctors and rural mobile clinics sought to save lives in the countryside, while doctors from other countries engaged in Cold War competition worked primarily in cities. This essay argues that images were essential propaganda tools during the Cold War and urges historians of technology to use images critically by considering the contexts that influenced their creation.


Subject(s)
Inservice Training, China, History, 20th Century, Humans, Inservice Training/history, Tanzania, Rural Health Services/history, Photography/history
12.
Transl Vis Sci Technol ; 13(4): 1, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564203

ABSTRACT

Purpose: The purpose of this study was to develop a deep learning algorithm to detect retinal breaks and retinal detachments on ultra-widefield fundus (UWF) Optos images using artificial intelligence (AI). Methods: Optomap UWF images in the database were annotated into four groups by two retina specialists: (1) retinal breaks without detachment, (2) retinal breaks with retinal detachment, (3) retinal detachment without visible retinal breaks, and (4) a combination of groups 1 to 3. The fundus image dataset was split into a training set and an independent test set following an 80%/20% ratio. Image preprocessing methods were applied. An EfficientNet classification model was trained on the training set and evaluated on the test set. Results: A total of 2489 UWF images were included in the dataset, resulting in a training set of 2008 UWF images and a test set of 481 images. The classification models achieved an area under the receiver operating characteristic curve (AUC) on the test set of 0.975 for lesion detection, an AUC of 0.972 for retinal detachment and an AUC of 0.913 for retinal breaks. Conclusions: A deep learning system to detect retinal breaks and retinal detachment using UWF images is feasible and has good specificity. This is relevant for clinical routine, as breaks can be missed at a high rate in clinics. Future clinical studies will be necessary to evaluate the cost-effectiveness of applying such an algorithm as an automated auxiliary tool in large practices or tertiary referral centers. Translational Relevance: This study demonstrates the relevance of applying AI to diagnosing peripheral retinal breaks on UWF fundus images in clinical routine.
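The AUC figures reported above can be computed directly from scores and labels via the Mann-Whitney (rank) formulation, without building the full ROC curve. A minimal illustrative sketch, not the study's evaluation pipeline:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a randomly chosen positive scores above a
    randomly chosen negative, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(n²) pairwise count is fine for a few hundred test images; for large sets a rank-based O(n log n) version (or `sklearn.metrics.roc_auc_score`) is preferable.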


Subject(s)
Deep Learning, Retinal Detachment, Retinal Perforations, Humans, Retinal Detachment/diagnosis, Artificial Intelligence, Photography
13.
Cutis ; 113(3): 141-142, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38648596

ABSTRACT

Precise wound approximation during cutaneous suturing is of vital importance for optimal closure and long-term scar outcomes. Utilizing smartphone camera technology as a quality-control checkpoint for objective evaluation allows the dermatologic surgeon to scrutinize the wound edges and refine their surgical technique to improve scar outcomes.


Subject(s)
Cicatrix, Smartphone, Suture Techniques, Humans, Suture Techniques/instrumentation, Photography, Dermatologic Surgical Procedures/instrumentation, Dermatologic Surgical Procedures/methods, Epidermis
14.
PLoS One ; 19(4): e0298285, 2024.
Article in English | MEDLINE | ID: mdl-38573887

ABSTRACT

For many species, population sizes are unknown despite their importance for conservation. For population size estimation, capture-mark-recapture (CMR) studies are often used, which require identifying each individual, mostly through individual markings or genetic characters. Invasive marking techniques, however, can negatively affect individual fitness. Alternatives are low-impact techniques such as the use of photos for individual identification in species with stable, distinctive phenotypic traits. For the individual identification of photos, a variety of software with different requirements is available. The European fire salamander (Salamandra salamandra) is a species in which individuals, both at the larval stage and as adults, have specific patterns that allow individual identification. In this study, we compared the performance of five software packages for photographic identification of the European fire salamander: Amphibian & Reptile Wildbook (ARW), AmphIdent, I3S pattern+, ManderMatcher and Wild-ID. While adults can be identified by all five packages, European fire salamander larvae can currently only be identified by two of the five (ARW and Wild-ID). We used one dataset of European fire salamander larval pictures taken in the laboratory and tested it in these two packages (ARW and Wild-ID). We used another dataset of European fire salamander adult pictures taken in the field and tested it using all five packages. We compared the requirements each package places on the pictures used and calculated the False Rejection Rate (FRR) and the Recognition Rate (RR). For the larval dataset (421 pictures), we found that the ARW and Wild-ID performed equally well for individual identification (99.6% and 100% Recognition Rate, respectively). For the adult dataset (377 pictures), we found the best False Rejection Rate with ManderMatcher and the highest Recognition Rate with the ARW. Additionally, the ARW is the only program that requires no image pre-processing. In times of amphibian declines, non-invasive photo-identification software enabling capture-mark-recapture studies helps to gain knowledge of the population sizes, distribution, movement and demography of a population and can thus support species conservation.
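The two metrics compared across the five programs reduce to simple proportions over query photos with known identities. The definitions below are illustrative assumptions (the paper's exact operationalization may differ, e.g. in how a "rejection" is recorded):

```python
def recognition_rate(predicted_ids, true_ids):
    """Fraction of query photos whose top-ranked match is the correct individual."""
    hits = sum(p == t for p, t in zip(predicted_ids, true_ids))
    return hits / len(true_ids)

def false_rejection_rate(predicted_ids, true_ids):
    """Fraction of genuine recaptures for which the software proposed no match.

    A rejection is encoded here as a predicted id of None (an assumption
    of this sketch).
    """
    rejections = sum(1 for p in predicted_ids if p is None)
    return rejections / len(true_ids)
```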


Subject(s)
Salamandra, Humans, Animals, Larva, Phenotype, Photography, Software
15.
Vet Rec ; 194(9): e4088, 2024 05 04.
Article in English | MEDLINE | ID: mdl-38637964

ABSTRACT

BACKGROUND: Ophthalmoscopy is a valuable tool in clinical practice. We report the use of a novel smartphone-based handheld device for visualisation and photo-documentation of the ocular fundus in veterinary medicine. METHODS: Selected veterinary patients of a referral ophthalmology service were included if one or both eyes had clear ocular media, allowing for examination of the fundus. Following pharmacological mydriasis, fundic images were obtained with a handheld fundus camera (Volk VistaView). For comparison, the fundus of a subset of animals was also imaged with a veterinary-specific fundus camera (Optomed Smartscope VET2). RESULTS: The large field of view achieved by the Volk VistaView allowed for rapid and thorough observation of the ocular fundus in animals, providing a tool to visualise and record common pathologies of the posterior segment. Captured fundic images were sometimes overexposed, with the tapetal fundus artificially appearing hyperreflective when using the Volk VistaView camera, a finding that was less frequent when activating a 'veterinary mode' that reduced the sensitivity of the camera's sensor. The Volk VistaView compared well with the Optomed Smartscope VET2. LIMITATION: The main study limitation was the small sample size. CONCLUSIONS: The Volk VistaView camera was easy to use and provided good-quality fundic images in veterinary patients with healthy or diseased eyes, offering a wide field of view that was ideal for screening purposes.


Subject(s)
Retinal Diseases, Smartphone, Veterinary Medicine, Animals, Retinal Diseases/veterinary, Retinal Diseases/diagnosis, Veterinary Medicine/instrumentation, Ophthalmoscopy/veterinary, Ophthalmoscopy/methods, Fundus Oculi, Photography/veterinary, Photography/instrumentation, Dogs, Dog Diseases/diagnosis, Cats
16.
Meat Sci ; 213: 109503, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38579510

ABSTRACT

This study aims to describe the meat quality of young Holstein (HOL) beef-on-dairy heifers and bulls sired by Angus (ANG, n = 109), Charolais (CHA, n = 101) and Danish Blue (DBL, n = 127), and to investigate the performance of the handheld vision-based Q-FOM™ Beef camera in predicting the intramuscular fat concentration (IMF%) in M. longissimus thoracis from carcasses quartered at the 5th-6th thoracic vertebra. The results showed significant differences between crossbreeds and sexes in carcass characteristics and meat quality. DBL × HOL had the highest EUROP conformation scores, whereas ANG × HOL had darker meat with a higher IMF% (3.52%) compared to CHA × HOL (2.99%) and DBL × HOL (2.51%). Bulls had higher EUROP conformation scores than heifers, and heifers had a higher IMF% (3.70%) than bulls (2.31%). These findings indicate the potential for producing high-quality meat from beef-on-dairy heifers and ANG bulls. The IMF% prediction model for the Q-FOM performed well, with R2 = 0.91 and a root mean squared error of cross-validation RMSECV = 1.33%. The prediction model had lower accuracy (R2 = 0.48) on the beef-on-dairy veal subsample, which ranged from 0.9 to 7.4% IMF, with a prediction error (RMSE for veal) of 1.00%. When grouping beef-on-dairy veal carcasses into three IMF% classes (2.5% IMF bins), 62.6% of the carcasses were accurately predicted. Furthermore, Q-FOM IMF% predictions and chemically determined IMF% were similar for each combination of sex and crossbreed, revealing the potential of Q-FOM IMF% predictions to be used in breeding when aiming for higher meat quality.


Subject(s)
Adipose Tissue, Muscle, Skeletal, Red Meat, Thoracic Vertebrae, Animals, Cattle, Male, Red Meat/analysis, Female, Adipose Tissue/chemistry, Muscle, Skeletal/chemistry, Photography, Color, Breeding
17.
Meat Sci ; 213: 109500, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38582006

ABSTRACT

The objective of this study was to develop calibration models against rib eye traits and independently validate the precision, accuracy, and repeatability of the Frontmatec Q-FOM™ Beef grading camera in Australian carcasses. This study compiled 12 research datasets acquired from commercial processing facilities, comprising a diverse range of carcass phenotypes graded by industry-identified expert Meat Standards Australia (MSA) graders and sampled for chemical intramuscular fat (IMF%). Calibration performance was maintained when the device was independently validated. For continuous traits, the Q-FOM™ demonstrated precise (root mean squared error of prediction, RMSEP) and accurate (coefficient of determination, R2) prediction of eye muscle area (EMA) (R2 = 0.89, RMSEP = 4.3 cm2, slope = 0.96, bias = 0.7), MSA marbling (R2 = 0.95, RMSEP = 47.2, slope = 0.98, bias = -12.8) and chemical IMF% (R2 = 0.94, RMSEP = 1.56%, slope = 0.96, bias = 0.64). For categorical traits, the Q-FOM™ predicted 61%, 64.3% and 60.8% of AUS-MEAT marbling, meat colour and fat colour scores, respectively, as equivalent to expert scores, and 95% within ±1 class. The Q-FOM™ also demonstrated very high repeatability and reproducibility across all traits.
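The calibration statistics quoted above (RMSEP, R2, slope, bias) can be computed from paired predictions and reference values. A minimal sketch using common textbook definitions, which may differ in detail from the study's conventions:

```python
import math

def calibration_metrics(predicted, observed):
    """RMSEP, R^2, regression slope and mean bias of predictions vs. reference.

    Textbook definitions (illustrative); chemometrics conventions vary, e.g.
    in whether bias is removed before computing the error term.
    """
    n = len(predicted)
    rmsep = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
    mean_p = sum(predicted) / n
    mean_o = sum(observed) / n
    ss_res = sum((o - p) ** 2 for p, o in zip(predicted, observed))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot
    cov = sum((p - mean_p) * (o - mean_o) for p, o in zip(predicted, observed))
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    slope = cov / var_p    # least-squares slope of observed on predicted
    bias = mean_p - mean_o  # mean prediction error
    return {"rmsep": rmsep, "r2": r2, "slope": slope, "bias": bias}
```

A perfect calibration gives RMSEP = 0, R2 = 1, slope = 1 and bias = 0; departures of slope from 1 and bias from 0 indicate systematic over- or under-prediction.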


Subject(s)
Adipose Tissue, Color, Muscle, Skeletal, Photography, Red Meat, Animals, Australia, Cattle, Red Meat/analysis, Red Meat/standards, Photography/methods, Calibration, Phenotype, Reproducibility of Results, Ribs
18.
Nature ; 628(8008): 563-568, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38600379

ABSTRACT

More than a quarter of the world's tropical forests are exploited for timber [1]. Logging impacts biodiversity in these ecosystems, primarily through the creation of forest roads that facilitate hunting for wildlife over extensive areas. Forest management certification schemes such as the Forest Stewardship Council (FSC) are expected to mitigate impacts on biodiversity, but so far very little is known about the effectiveness of FSC certification because of research design challenges, predominantly limited sample sizes [2,3]. Here we provide this evidence by using 1.3 million camera-trap photos of 55 mammal species in 14 logging concessions in western equatorial Africa. We observed higher mammal encounter rates in FSC-certified than in non-FSC logging concessions. The effect was most pronounced for species weighing more than 10 kg and for species of high conservation priority such as the critically endangered forest elephant and western lowland gorilla. Across the whole mammal community, non-FSC concessions contained proportionally more rodents and other small species than did FSC-certified concessions. The first priority for species protection should be to maintain unlogged forests with effective law enforcement, but for logged forests our findings provide convincing data that FSC-certified forest management is less damaging to the mammal community than is non-FSC forest management. This study provides strong evidence that FSC-certified forest management or equivalently stringent requirements and controlling mechanisms should become the norm for timber extraction to avoid half-empty forests dominated by rodents and other small species.


Subject(s)
Certification, Forestry, Forests, Mammals, Animals, Africa, Western, Biodiversity, Body Weight, Conservation of Natural Resources/legislation & jurisprudence, Conservation of Natural Resources/methods, Elephants, Forestry/legislation & jurisprudence, Forestry/methods, Forestry/standards, Gorilla gorilla, Mammals/anatomy & histology, Mammals/classification, Mammals/physiology, Photography, Rodents, Male, Female
19.
Ecology ; 105(5): e4298, 2024 May.
Article in English | MEDLINE | ID: mdl-38610092

ABSTRACT

Camera traps have become the main observational method for a myriad of species over large areas. Data sets from camera traps can be used to describe patterns in, and monitor, the occupancy, abundance, and richness of wildlife, information essential for conservation in times of rapid climate and land-cover change. Habitat loss and poaching are responsible for historical population losses of mammals in the Atlantic Forest biodiversity hotspot, especially for medium to large-sized species. Here we present a data set from camera trap surveys of medium to large-sized native mammals (>1 kg) across the Atlantic Forest. We compiled data from 5380 ground-level camera trap deployments at 3046 locations, from 2004 to 2020, resulting in 43,068 records of 58 species. These data add to existing data sets of mammals in the Atlantic Forest by including the dates of camera operation needed for analyses dealing with imperfect detection. We also included, when available, information on important predictors of detection, namely the camera brand and model, the use of bait, and the obstruction of the camera viewshed, which can be measured from example pictures at each camera location. Besides its application in studies of the patterns and mechanisms behind occupancy, relative abundance, richness, and detection, the data set presented here can be used to study species' daily activity patterns, activity levels, and spatiotemporal interactions between species. Moreover, the data can be combined with other data sources in the multiple and expanding uses of integrated population modeling. An R script is available to view summaries of the data set. We expect that this data set will be used to advance the knowledge of mammal assemblages and to inform evidence-based solutions for the conservation of the Atlantic Forest. The data are not copyright restricted; please cite this paper when using the data.




Subject(s)
Forests, Mammals, Mammals/physiology, Animals, Photography, Biodiversity, Conservation of Natural Resources/methods